Fixed issues #811 and #812, which were breaking the pipeline #813
Conversation
Hi there, thanks for the PR. However, retrying on any error code is a very bad idea; it can lead to a lot of confusion. I oppose that change. (Line 18 in 8f139ce)
I do not want to change the behavior such that a retry is launched for no reason when there is a reproducible, justified error. That's why we do not retry everything by default; in the overwhelming majority of cases that is just a waste of resources. If that change solves your problem, that's great, but there is no need to change the pipeline code. Instead you can achieve this with a custom configuration (a sketch follows below), or use a more process-targeted approach as in e.g. Lines 65 to 69 in 8f139ce.
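The config snippets embedded in the original comment are not preserved here; below is a minimal sketch of the two approaches, assuming a user-supplied custom config file. The process name FASTQC is a placeholder, not a process named in this PR.

```groovy
// custom.config -- supplied at runtime with: nextflow run ... -c custom.config
// Sketch only: 'FASTQC' stands in for whichever process is actually failing on Azure.
process {
    // Blanket alternative: retry every task on any failure (what this PR proposes pipeline-wide).
    // errorStrategy = 'retry'

    // Targeted alternative: retry only the affected process.
    withName: 'FASTQC' {
        errorStrategy = 'retry'
        maxRetries    = 2
    }
}
```

Either way, the retry behaviour stays out of the pipeline code and under the control of the person running it.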
The code is encountering frequent issues. In this PR, step #811 focuses on renaming the files and step #812 maps the filtered files. I verified the files after a retry and the output remains consistent; there are no discrepancies in the files before and after applying this PR. The recurring issue is due to Azure's local environment and the process of extracting files from the data lake, and this behavior will persist without the changes introduced here. Therefore, I strongly recommend including this PR to address the problem effectively.
Thanks for explaining your reasoning. I do believe that the resulting files are fine, and I acknowledge that this proposed change will solve the issue on Azure. But it will negatively affect the execution of the pipeline on any other platform (retry for all exit codes of every process).
This issue seems related to the Azure infrastructure setup and is not general to all users of the pipeline, nor really pipeline specific. I do not think that we should apply a blanket retry for all users like this. Alternatives would be setting this in user configs instead, or better still adding it to the azurebatch institutional config as a pipeline-specific config.
The latter is already set up in this pipeline, so no changes are needed here; it's purely an nf-core/configs repo change. You need to create a new config file there and then include it here.
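A rough sketch of what such a pipeline-specific azurebatch config could look like on the nf-core/configs side, assuming the usual conf/pipeline/<pipeline>/azurebatch.config layout; the process selector and values are placeholders:

```groovy
// nf-core/configs: conf/pipeline/<pipeline>/azurebatch.config (path and selector are illustrative)
process {
    withName: 'FASTQC' {
        errorStrategy = 'retry'
        maxRetries    = 2
    }
}
```

Because the pipeline already loads pipeline-specific institutional configs, users running with -profile azurebatch would pick this up without any change to the pipeline code.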
It's also possible that the failures are memory related (Line 30 in 8f139ce). So something else that you could try in the Azure-specific pipeline config is to assign more memory to these tasks. Then they might not fail in the first place and you may not need the retry at all.
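As a sketch of that suggestion (process name and resource value are placeholders, not taken from the pipeline):

```groovy
// Azure-specific config: request more memory so the affected tasks do not fail in the first place.
process {
    withName: 'FASTQC' {
        memory = 16.GB
    }
}
```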
Without setting maxRetries to 3, just keeping the errorStrategy.
Co-authored-by: Phil Ewels <[email protected]>
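For reference, a sketch of the distinction being discussed; when maxRetries is not set explicitly, Nextflow's default of 1 applies:

```groovy
// Suggested earlier but not applied: an explicit retry budget.
// process {
//     errorStrategy = 'retry'
//     maxRetries    = 3
// }

// As applied: errorStrategy only, so Nextflow falls back to its default maxRetries (1).
process {
    errorStrategy = 'retry'
}
```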
@d4straub - suggest we close this PR, and @Dedaniya08 you can open one to nf-core/configs with the relevant changes.
Yes, I agree @ewels, better solved in nf-core/configs. Thanks a lot!
PR checklist
- Make sure your code lints (nf-core pipelines lint).
- Ensure the test suite passes (nextflow run . -profile test,docker --outdir <OUTDIR>).
- Check for unexpected warnings in debug mode (nextflow run . -profile debug,test,docker --outdir <OUTDIR>).
- docs/usage.md is updated.
- docs/output.md is updated.
- CHANGELOG.md is updated.
- README.md is updated (including new tool citations and authors/contributors).